The manipulation pipeline has been released; THIS DEMO PAGE IS NOW DEPRECATED. For current information on the manipulation pipeline, and on how to execute pick-and-place tasks using ROS and the PR2, see the pr2_tabletop_manipulation_apps stack page and the pr2_pick_and_place_demos package.

(!) Please ask about problems and questions regarding this tutorial on answers.ros.org. Don't forget to include in your question the link to this page, the versions of your OS & ROS, and also add appropriate tags.

One-line key points to mention during the manipulation demo

Description: A condensed version of the main demo page, which describes all the functionality in detail. Think of it as a "cheat sheet" for remembering the main points that the audience might be interested in.

Tutorial Level:

  • system-wide
    • complex behavior, requiring many software and hardware components
    • if you think you can do better on one module, we'd love for you to publish code that does so
  • high-level
    • we can pick up both known and previously unseen objects
    • operation is collision-free, in a largely unstructured environment
  • object detection
    • we use point clouds from narrow stereo for object detection / recognition
    • we assume objects are sitting on a table and are not cluttered
    • we use a clustering algorithm to segment the point clusters belonging to individual objects (see the clustering sketch after this list)
    • we use a simple fitting method to recognize known objects
  • grasp point selection
    • grasps for recognized objects are retrieved from a database
      • the database will be released soon
      • it contains models for objects that can be easily obtained from large retailers
    • grasps for unknown objects are computed on the fly (see the grasp-selection sketch after this list)
  • environment sensing
    • we use the tilting laser to build a Collision Map of the environment (see the collision-map sketch after this list)
    • robot body parts are filtered out of the Collision Map
    • the objects detected using the narrow stereo are explicitly added to this Map
    • the robot can also reason about avoiding collisions while holding an object
  • motion planning
    • our motion planner is an RRT variant (see the RRT sketch after this list)
    • it uses the Collision Map to avoid collision with the environment
  • grasp execution
    • we use the motion planner to move the arm to a pre-grasp pose
    • we use interpolated IK to get from pre-grasp to grasp (see the interpolated-IK sketch after this list)
    • we use interpolated IK to lift the object slightly from the table
    • we use the motion planner to move the arm (holding the object)
  • reactive grasping
    • tactile sensors in the fingertips are used to correct for errors
    • this happens during the motion from pre-grasp to grasp, and again when adjusting the final grasp (see the reactive-grasping sketch after this list)
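
The sketches below expand on a few of the points above. First, object segmentation: a minimal, self-contained illustration of Euclidean clustering over a tabletop point cloud, assuming the table plane has already been found and removed by a simple height threshold. This is only a toy version of the idea; the demo itself uses its own tabletop detection pipeline, and the thresholds here (2 cm cluster radius, 30-point minimum) are made up for illustration.

{{{#!python
import numpy as np

def euclidean_clusters(points, radius=0.02, min_size=30):
    """Single-linkage clustering: points closer than `radius` end up in one cluster."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        frontier = [remaining.pop()]
        cluster = []
        while frontier:
            idx = frontier.pop()
            cluster.append(idx)
            close = [j for j in remaining
                     if np.linalg.norm(points[j] - points[idx]) < radius]
            for j in close:
                remaining.remove(j)
            frontier.extend(close)
        if len(cluster) >= min_size:
            clusters.append(points[cluster])
    return clusters

# Toy usage: a flat "table" at z = 0 with two small object clusters above it.
cloud = np.vstack([
    np.random.uniform([0, 0, 0], [1, 1, 0.005], (500, 3)),  # table points
    np.random.normal([0.3, 0.3, 0.06], 0.01, (100, 3)),     # object 1
    np.random.normal([0.7, 0.6, 0.09], 0.01, (100, 3)),     # object 2
])
above_table = cloud[cloud[:, 2] > 0.02]                      # crude plane removal
print([len(c) for c in euclidean_clusters(above_table)])
}}}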
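
Grasp selection for unknown objects: one simple way such grasps can be computed on the fly is to take the segmented cluster, fit the principal axes of its footprint, and grasp from above across the thinnest horizontal direction. This is a hypothetical heuristic written for illustration; it is not necessarily how the demo's grasp planner works.

{{{#!python
import numpy as np

def overhead_grasp_for_cluster(points):
    """Propose a top-down grasp: fingers close across the thinnest horizontal axis."""
    centroid = points.mean(axis=0)
    xy = points[:, :2] - centroid[:2]
    cov = xy.T @ xy / len(xy)                    # 2x2 covariance of the footprint
    eigvals, eigvecs = np.linalg.eigh(cov)
    minor_axis = np.append(eigvecs[:, 0], 0.0)   # thin direction, in the table plane
    approach = np.array([0.0, 0.0, -1.0])        # straight down toward the table
    grasp_point = np.array([centroid[0], centroid[1], points[:, 2].max()])
    return grasp_point, approach, minor_axis
}}}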
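
Environment sensing: a toy occupancy-grid stand-in for the Collision Map, showing the three ingredients listed above: laser points are inserted, points that fall on the robot's own body are discarded via a caller-supplied self-filter predicate, and detected objects are added explicitly as boxes. The class and method names are invented for this sketch; the real map is built by the collision mapping components of the manipulation pipeline.

{{{#!python
import numpy as np

class ToyCollisionMap:
    """Coarse occupancy grid over the workspace (coordinates assumed non-negative)."""
    def __init__(self, size=(100, 100, 50), resolution=0.02):
        self.occupied = np.zeros(size, dtype=bool)
        self.resolution = resolution

    def _index(self, point):
        return tuple(int(c / self.resolution) for c in point)

    def add_laser_points(self, points, is_robot_body):
        """Mark cells hit by laser returns, discarding points on the robot itself."""
        for p in points:
            if not is_robot_body(p):
                self.occupied[self._index(p)] = True

    def add_object_box(self, center, half_extents):
        """Explicitly add a detected object as a solid occupied box."""
        lo = self._index(np.asarray(center) - np.asarray(half_extents))
        hi = self._index(np.asarray(center) + np.asarray(half_extents))
        self.occupied[lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1] = True

    def in_collision(self, point):
        return bool(self.occupied[self._index(point)])
}}}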
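
Motion planning: a minimal joint-space RRT, only to show the sample-and-extend loop behind sampling-based planners. The demo's planner is a considerably more capable RRT variant; the step size, goal bias and the `is_free` collision-check callback here are placeholders.

{{{#!python
import random
import numpy as np

def rrt(start, goal, is_free, joint_limits, step=0.1, iters=5000, goal_bias=0.1):
    """Return a list of joint configurations from start to goal, or None on failure."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parents = [start], {0: None}
    for _ in range(iters):
        target = (goal if random.random() < goal_bias
                  else np.array([random.uniform(lo, hi) for lo, hi in joint_limits]))
        nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - target))
        direction = target - nodes[nearest]
        new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not is_free(new):                       # reject samples inside obstacles
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = nearest
        if np.linalg.norm(new - goal) < step:      # close enough: trace the path back
            path, i = [goal], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
}}}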
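
Grasp execution: a sketch of the interpolated-IK idea used between pre-grasp and grasp (and for the small lift off the table). Gripper positions are interpolated along a straight line and an IK solution is required at every waypoint, seeded from the previous one so the arm moves smoothly. `ik_solver` stands in for whatever IK routine is available, and orientation is ignored to keep the example short.

{{{#!python
import numpy as np

def interpolated_ik(pregrasp_pos, grasp_pos, seed_joints, ik_solver, steps=10):
    """Return a joint trajectory for a straight-line Cartesian motion, or None."""
    pregrasp_pos = np.asarray(pregrasp_pos, float)
    grasp_pos = np.asarray(grasp_pos, float)
    trajectory, joints = [], seed_joints
    for t in np.linspace(0.0, 1.0, steps):
        waypoint = (1.0 - t) * pregrasp_pos + t * grasp_pos
        joints = ik_solver(waypoint, seed=joints)  # seed with the previous solution
        if joints is None:                         # any unreachable waypoint aborts
            return None
        trajectory.append(joints)
    return trajectory
}}}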
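
Reactive grasping: a toy control loop showing how fingertip contact can be used to re-center the hand during the approach from pre-grasp to grasp, before closing the gripper. All of the callbacks and the 1 cm correction step are illustrative placeholders, not the demo's actual controller.

{{{#!python
def reactive_approach(move_hand_forward, shift_hand_sideways, close_gripper,
                      left_tip_contact, right_tip_contact, max_steps=50):
    """Approach the object, nudging sideways whenever only one fingertip touches."""
    for _ in range(max_steps):
        left, right = left_tip_contact(), right_tip_contact()
        if left and right:            # both fingertips in contact: centered, so grasp
            break
        if left and not right:        # contact only on the left finger: nudge right
            shift_hand_sideways(+0.01)
        elif right and not left:      # contact only on the right finger: nudge left
            shift_hand_sideways(-0.01)
        else:
            move_hand_forward(0.01)   # no contact yet: keep approaching
    close_gripper()
}}}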
